167 research outputs found

    Functionality-power-packaging considerations in context aware wearable systems

    Get PDF
    Wearable computing places tighter constraints on architecture design than traditional mobile computing: the architecture must be described in terms of miniaturization, power awareness, global low-power design, and suitability for the application. In this article we present a new methodology based on three system properties: functionality, power, and electronic packaging. Metrics for each are proposed and evaluated to study the trade-offs in different context-recognition scenarios. The proof-of-concept case study covers (a) interaction with household appliances using a wrist-worn device (acceleration and light sensors), (b) analysis of walking behavior with acceleration sensors, (c) a computational task, and (d) gesture recognition in a wood workshop using a combination of accelerometer and microphone sensors. From the case study we highlight the size aspect of electronic packaging for a given functionality and present miniaturization trends for an 'autonomous sensor button'.
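
    A minimal, hypothetical Python sketch of the kind of metrics-based trade-off comparison this abstract describes: each candidate wearable design is scored on functionality, power, and packaging (size). The metric definitions, weights, and numbers below are illustrative assumptions, not the paper's formulas or results.

        # Hypothetical trade-off scoring across functionality, power, and packaging.
        from dataclasses import dataclass

        @dataclass
        class Design:
            name: str
            recognition_accuracy: float   # functionality proxy, 0..1
            avg_power_mw: float           # average power draw in milliwatts
            volume_cm3: float             # electronic packaging volume

        def tradeoff_score(d: Design, w_func=0.5, w_power=0.3, w_size=0.2):
            # Higher is better: reward functionality, penalize power and size.
            return (w_func * d.recognition_accuracy
                    - w_power * d.avg_power_mw / 100.0
                    - w_size * d.volume_cm3 / 10.0)

        candidates = [
            Design("wrist-worn accel + light sensor", 0.87, 45.0, 6.0),
            Design("wrist-worn accel + microphone", 0.92, 120.0, 8.5),
        ]
        for d in sorted(candidates, key=tradeoff_score, reverse=True):
            print(f"{d.name}: score = {tradeoff_score(d):.3f}")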

    Learning from the Best: Contrastive Representations Learning Across Sensor Locations for Wearable Activity Recognition

    Full text link
    We address the well-known wearable activity recognition problem of having to work with sensors that are non-optimal in terms of the information they provide but must be used because of wearability/usability concerns (e.g. wrist-worn IMUs, because they are embedded in most smartwatches). To mitigate this problem, we propose a method that exploits information from sensors that are present only during training and are unavailable when the system is later used. The method transfers information from the source sensors to the latent representation of the target-sensor data through a contrastive loss that is combined with the classification loss during joint training. We evaluate the method on the well-known PAMAP2 and Opportunity benchmarks for different combinations of source and target sensors, showing average (over all activities) F1-score improvements of between 5% and 13%, with the improvement on individual activities that are particularly well suited to benefit from the additional information going up to between 20% and 40%.
    Comment: Presented at Ubicomp/ISWC 202
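
    A minimal PyTorch sketch (an assumption-laden illustration, not the authors' code) of the idea described above: a classification loss on the target-sensor embedding is combined with a contrastive (InfoNCE-style) loss that pulls it toward the source-sensor embedding during joint training, so that only the target encoder is needed at inference time. The encoder architecture, embedding size, temperature, class count, and loss weight are illustrative.

        # Joint classification + contrastive training across sensor locations (sketch).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SensorEncoder(nn.Module):
            def __init__(self, in_ch, emb_dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, emb_dim))
            def forward(self, x):                      # x: (batch, channels, time)
                return F.normalize(self.net(x), dim=1)

        def info_nce(z_target, z_source, temperature=0.1):
            # Positives: the same window seen by both sensors; negatives: other windows.
            logits = z_target @ z_source.t() / temperature
            labels = torch.arange(z_target.size(0))
            return F.cross_entropy(logits, labels)

        target_enc = SensorEncoder(in_ch=3)            # e.g. wrist IMU (deployed)
        source_enc = SensorEncoder(in_ch=3)            # e.g. ankle IMU (training only)
        classifier = nn.Linear(64, 12)                 # 12 activity classes (assumed)
        params = (list(target_enc.parameters()) + list(source_enc.parameters())
                  + list(classifier.parameters()))
        opt = torch.optim.Adam(params, lr=1e-3)

        x_target, x_source = torch.randn(16, 3, 128), torch.randn(16, 3, 128)
        y = torch.randint(0, 12, (16,))
        z_t, z_s = target_enc(x_target), source_enc(x_source)
        loss = F.cross_entropy(classifier(z_t), y) + 0.5 * info_nce(z_t, z_s)
        opt.zero_grad(); loss.backward(); opt.step()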

    Compiling machine-independent parallel programs

    Get PDF

    TASKED: Transformer-based Adversarial learning for human activity recognition using wearable sensors via Self-KnowledgE Distillation

    Full text link
    Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is used in a variety of applications. Recently, deep learning-based methods have brought significant improvements to the HAR field alongside the development of human-computer interaction applications. However, standard convolutional neural networks are limited to operating within a local neighborhood, and correlations between sensors at different body positions are ignored. In addition, these methods still suffer significant performance degradation due to large gaps between the distributions of training and test data and to behavioral differences between subjects. In this work, we propose TASKED, a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation, which accounts for individual sensor orientations as well as spatial and temporal features. The proposed method learns cross-domain embedding feature representations from datasets of multiple subjects, using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distributions over multiple domains. We also adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on the four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
    Comment: 17 pages, 5 figures, Submitted to Knowledge-Based Systems, Elsevier. arXiv admin note: substantial text overlap with arXiv:2110.1216
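
    A minimal PyTorch sketch (not the TASKED implementation) of one ingredient named above: RBF-kernel maximum mean discrepancy (MMD) used as a regularizer to align embedding distributions from two subjects/domains. The kernel bandwidth and the way the term enters the total loss are illustrative assumptions.

        # RBF-kernel MMD between two batches of embeddings (sketch).
        import torch

        def rbf_mmd(x, y, sigma=1.0):
            # x, y: (n, d) embeddings from domain A and domain B.
            def kernel(a, b):
                return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
            return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

        # Hypothetical use inside a training step:
        #   total_loss = task_loss + adversarial_loss + lambda_mmd * rbf_mmd(feat_a, feat_b)
        feat_a, feat_b = torch.randn(32, 64), torch.randn(32, 64)
        print(rbf_mmd(feat_a, feat_b).item())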

    MeciFace: Mechanomyography and Inertial Fusion based Glasses for Edge Real-Time Recognition of Facial and Eating Activities

    Full text link
    The increasing prevalence of stress-related eating behaviors and their impact on overall health highlights the importance of effective monitoring systems. In this paper, we present MeciFace, an innovative wearable technology designed to monitor facial expressions and eating activities in real-time on-the-edge (RTE). MeciFace aims to provide a low-power, privacy-conscious, and highly accurate tool for promoting healthy eating behaviors and stress management. We employ lightweight convolutional neural networks as backbone models for the facial expression and eating monitoring scenarios. The MeciFace system ensures efficient data processing with a tiny memory footprint, ranging from 11 KB to 19 KB. During RTE evaluation, the system achieves impressive performance, yielding an F1-score of 86% for facial expression recognition and 90% for eating/drinking monitoring, even for the RTE of an unseen user.
    Comment: Submitted to Nature Scientific Report
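
    A minimal PyTorch sketch (assumptions only, not the MeciFace network) of a lightweight CNN backbone of the kind described, with a quick check of its parameter footprint; layer sizes, input channels, class count, and the int8 quantization assumption are illustrative.

        # Tiny CNN backbone plus a rough parameter-memory estimate (sketch).
        import torch
        import torch.nn as nn

        class TinyBackbone(nn.Module):
            def __init__(self, in_ch=6, n_classes=5):     # e.g. fused MMG + IMU channels
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(in_ch, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
                    nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
                self.head = nn.Linear(16, n_classes)

            def forward(self, x):                          # x: (batch, channels, time)
                return self.head(self.features(x).flatten(1))

        model = TinyBackbone()
        n_params = sum(p.numel() for p in model.parameters())
        print(f"{n_params} parameters, about {n_params / 1024:.1f} KB at int8 precision")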